40 research outputs found

    INDIVIDUAL DIFFERENCES IN BRAIN ACTIVITIES WHEN HUMAN WISHES TO LISTEN TO MUSIC CONTINUOUSLY USING NEAR-INFRARED SPECTROSCOPY

    This paper introduces individual differences in prefrontal cortex activity when a person wishes to listen to music, measured using near-infrared spectroscopy (NIRS). The individual differences are confirmed by visualizing the variation in oxygenated hemoglobin levels. The sensing positions used to record brain activity are located around the prefrontal cortex. The existence of individual differences was verified experimentally: the positions that become active when a subject feels the wish to listen to music differ between subjects, and each subject's oxygenated hemoglobin level differs from its value when the subject does not feel that wish. The results show that it is possible to detect a wish to listen to music from changes in the oxygenated hemoglobin level. They also suggest that the active positions differ between subjects because sensitivity to, and perception of, the stimulus differ, and that these individual differences can therefore be expressed as differences in active positions.
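    The detection idea described here, comparing oxygenated-hemoglobin levels against their value when no wish is felt, can be sketched as a simple per-channel baseline-deviation test. The channel layout, threshold k, and synthetic data below are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

def detect_wish(baseline, task, k=2.0):
    """Flag NIRS channels whose mean oxy-Hb level during the task deviates
    from the resting baseline by more than k standard deviations
    (per channel). Input arrays have shape (samples, channels)."""
    base_mean = baseline.mean(axis=0)
    base_std = baseline.std(axis=0) + 1e-12   # avoid division by zero
    deviation = np.abs(task.mean(axis=0) - base_mean) / base_std
    return deviation > k                      # boolean mask of "active" channels

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 0.1, size=(200, 4))   # 4 channels at rest
task = baseline.copy()
task[:, 2] += 0.5                                # channel 2 responds strongly
print(detect_wish(baseline, task))
```

Because the active positions differ per subject, such a mask would be computed per subject rather than with one shared threshold.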

    EEG Analysis Method to Detect Unspoken Answers to Questions Using MSNNs

    Brain–computer interfaces (BCIs) facilitate communication between the human brain and computational systems, additionally offering mechanisms for environmental control to enhance human life. The current study focused on the application of BCIs for communication support, especially in detecting unspoken answers to questions. Utilizing a multistage neural network (MSNN) with convolutional and pooling layers, the proposed method comprises a threefold approach: electroencephalogram (EEG) measurement, EEG feature extraction, and answer classification. The EEG signals of the participants are captured as they mentally respond with “yes” or “no” to the posed questions. Feature extraction is achieved through an MSNN composed of three distinct convolutional neural network models. The first model discriminates between EEG signals with and without discernible noise artifacts, whereas the subsequent two models are designated for feature extraction from EEG signals with or without such artifacts. Furthermore, a support vector machine is employed to classify the answers to the questions. The proposed method was validated in experiments using authentic EEG data. The mean and standard deviation of the sensitivity and precision of the proposed method were 99.6% and 0.2%, respectively. These findings demonstrate the viability of attaining high accuracy in a BCI by preliminarily segregating the EEG signals based on the presence or absence of artifact noise, and they underscore the stability of such classification. Thus, the proposed method offers the prospective advantage of separating EEG signals characterized by noise artifacts for enhanced BCI performance.
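    As a rough illustration of the three-stage routing idea (artifact screening, branch-specific feature extraction, final classification), the toy sketch below substitutes simple amplitude statistics for the CNNs and a fixed linear rule for the SVM; all thresholds and features are assumptions, not the paper's trained models:

```python
import numpy as np

def has_artifact(trial, amp_thresh=100.0):
    """Stage 1: flag trials whose peak amplitude suggests noise artifacts."""
    return np.abs(trial).max() > amp_thresh

def extract_features(trial, noisy):
    """Stage 2: branch-specific feature extraction (here: simple statistics;
    the paper uses two separate CNN models for the two branches)."""
    if noisy:
        trial = np.clip(trial, -100.0, 100.0)   # crude artifact suppression
    return np.array([trial.mean(), trial.std(), np.abs(np.diff(trial)).mean()])

def classify(features, w, b):
    """Stage 3: linear decision ("yes" = 1 / "no" = 0); the paper uses an SVM."""
    return int(features @ w + b > 0)

rng = np.random.default_rng(1)
clean_trial = rng.normal(0, 10, 256)
noisy_trial = clean_trial.copy()
noisy_trial[50] = 500.0                          # inject a spike artifact
for trial in (clean_trial, noisy_trial):
    feats = extract_features(trial, has_artifact(trial))
    print(has_artifact(trial), classify(feats, np.array([0.0, 1.0, 0.0]), -5.0))
```

The point of the routing is that each downstream extractor only ever sees one kind of signal, which is what the paper credits for the stable classification.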

    Japanese sign language classification based on gathered images and neural networks

    This paper proposes a method to classify words in Japanese Sign Language (JSL). The approach combines a gathered-image generation technique with neural networks that have convolutional and pooling layers (CNNs). The gathered-image generation produces images based on mean images: the maximum difference value is calculated between blocks of the mean image and blocks of the JSL motion images, and the gathered image comprises the blocks having those maximum difference values. CNNs extract features from the gathered images, while a support vector machine for multi-class classification and a multilayer perceptron are employed to classify 20 JSL words. In the experiments, the mean recognition accuracy of the proposed method was 94.1%. These results suggest that the proposed method can obtain sufficient information to classify the sample words.
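    A minimal sketch of the gathered-image step, under the assumption that each output block is taken from whichever motion frame differs most from the mean image at that block position (block size and data below are illustrative):

```python
import numpy as np

def gathered_image(frames, block=8):
    """Build a gathered image: for each block position, copy the block from
    the frame with the largest absolute difference from the mean image
    there. `frames` has shape (n_frames, height, width)."""
    mean_img = frames.mean(axis=0)
    h, w = mean_img.shape
    out = np.empty_like(mean_img)
    for y in range(0, h, block):
        for x in range(0, w, block):
            ref = mean_img[y:y+block, x:x+block]
            diffs = np.abs(frames[:, y:y+block, x:x+block] - ref).sum(axis=(1, 2))
            out[y:y+block, x:x+block] = frames[diffs.argmax(), y:y+block, x:x+block]
    return out

rng = np.random.default_rng(2)
frames = rng.random((5, 16, 16))   # 5 frames of a motion sequence
g = gathered_image(frames)
print(g.shape)
```

The resulting single image condenses the most motion-salient regions of the sequence, which is what the CNN feature extractor then consumes.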

    Japanese Janken Recognition by Support Vector Machine Based on Electromyogram of Wrist

    In this paper, we propose a method that can discriminate hand motions. We measure the electromyogram (EMG) of the wrist using eight dry-type sensors. We focus on four motions: "Rock", "Scissors", "Paper", and "Neutral", where "Neutral" is a state in which the hand does nothing. In the proposed method, we apply the fast Fourier transform (FFT) to the measured EMG data and then remove the hum noise. Next, we combine the sensor values using a Gaussian function whose variance and mean are 0.2 and 0, respectively. We then normalize the values by a linear transformation, rescaling them into the range from -1 to 1. Finally, a support vector machine (SVM) performs learning and discrimination to classify the motions. We conducted experiments with seven subjects. The average discrimination accuracy was 89.8%, compared with 77.1% for the previous method; the proposed method is therefore more accurate. In future work, we will conduct an experiment that discriminates the Janken of subjects whose data were not used for training.
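    The preprocessing chain (FFT, hum removal, Gaussian-weighted sensor combination with mean 0 and variance 0.2, rescaling into [-1, 1]) might be sketched as follows; the sampling rate, notch width, and the mapping of sensor indices onto the Gaussian are assumptions:

```python
import numpy as np

FS = 1000.0   # assumed sampling rate (Hz)

def preprocess(emg, hum_hz=50.0, width=2.0):
    """Per-recording preprocessing sketch: FFT -> zero out the power-line
    hum bins -> inverse FFT -> Gaussian-weighted combination across the
    8 sensors -> linear rescaling into [-1, 1]."""
    n = emg.shape[0]
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    spec = np.fft.rfft(emg, axis=0)
    spec[np.abs(freqs - hum_hz) < width] = 0.0     # remove hum component
    clean = np.fft.irfft(spec, n=n, axis=0)
    # Gaussian weights over sensor index (mean 0, variance 0.2 as stated;
    # placing sensors on [-1, 1] is an illustrative assumption)
    idx = np.linspace(-1, 1, emg.shape[1])
    w = np.exp(-idx ** 2 / (2 * 0.2))
    combined = clean @ (w / w.sum())
    lo, hi = combined.min(), combined.max()
    return 2 * (combined - lo) / (hi - lo + 1e-12) - 1   # into [-1, 1]

rng = np.random.default_rng(3)
emg = rng.normal(size=(512, 8))    # synthetic 8-sensor recording
x = preprocess(emg)
print(x.min(), x.max())
```

The rescaled sequence is what the SVM would then be trained on.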

    Development of Eye Mouse Using EOG signals and Learning Vector Quantization Method

    Recognition of eye motions has attracted increasing attention from researchers worldwide in recent years. Compared with other body movements, eye motion is responsive and requires little physical strength. In particular, for patients with severe physical disabilities, eye motion is often the last voluntary movement with which they can respond. To provide an efficient means of communication for patients who cannot move any muscles except their eyes, such as those with amyotrophic lateral sclerosis (ALS), this paper proposes a system that uses EOG signals and the Learning Vector Quantization (LVQ) algorithm to recognize eye motions. According to the recognition results, we use an API (application programming interface) to control cursor movements. This system can be used as a means of communication to help ALS patients.
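    The LVQ1 update rule at the core of such a recognizer can be sketched as below, on synthetic two-class "eye motion" features standing in for real EOG data:

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """LVQ1 sketch: pull the nearest prototype toward a sample with the
    same label, push it away otherwise."""
    P = prototypes.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            j = np.argmin(((P - xi) ** 2).sum(axis=1))   # nearest prototype
            step = lr * (xi - P[j])
            P[j] += step if proto_labels[j] == yi else -step
    return P

def predict(P, proto_labels, X):
    return proto_labels[np.argmin(((P[None] - X[:, None]) ** 2).sum(-1), axis=1)]

rng = np.random.default_rng(4)
# Two synthetic clusters (e.g. left- vs right-deflection EOG features)
X = np.vstack([rng.normal(-1, 0.2, (50, 2)), rng.normal(1, 0.2, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
labels = np.array([0, 1])
P = train_lvq1(X, y, np.array([[-0.5, 0.0], [0.5, 0.0]]), labels)
acc = (predict(P, labels, X) == y).mean()
print(acc)
```

Each recognized class would then be mapped, via the OS cursor API, to a cursor movement.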

    Novel Approximate Statistical Algorithm for Large Complex Datasets

    In the field of pattern recognition, principal component analysis (PCA) is one of the most well-known feature extraction methods for reducing the dimensionality of high-dimensional datasets. Simple-PCA (SPCA), a faster version of PCA, performs effectively through iterative learning. However, SPCA might not be efficient when the input data are distributed in a complex manner, because it learns without using the class information in the dataset. Thus, SPCA cannot be considered optimal from the perspective of feature extraction for classification. In this study, we propose a new learning algorithm that uses the class information in the dataset: eigenvectors spanning the eigenspace of the dataset are produced by calculating the data variations within each class. We present the proposed algorithm and discuss the results of experiments on UCI datasets comparing SPCA with the proposed algorithm.
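    For contrast with the proposed class-aware algorithm, an SPCA-style extraction of leading eigenvectors by iterative updates with deflation (using no class information) can be sketched as follows; the iteration count and data are illustrative:

```python
import numpy as np

def simple_pca(X, n_components, iters=100):
    """SPCA-flavored sketch: extract leading eigenvectors one at a time by
    iterative (power-iteration-style) updates with deflation, instead of
    a full eigendecomposition of the covariance matrix."""
    Xc = X - X.mean(axis=0)
    comps = []
    for _ in range(n_components):
        v = np.random.default_rng(5).normal(size=Xc.shape[1])
        for _ in range(iters):
            v = Xc.T @ (Xc @ v)          # iterate toward the top eigenvector
            v /= np.linalg.norm(v)
        comps.append(v)
        Xc = Xc - np.outer(Xc @ v, v)    # deflate: remove the found direction
    return np.array(comps)

rng = np.random.default_rng(6)
# Axis 0 carries by far the largest variance in this synthetic dataset
X = rng.normal(size=(200, 5)) * np.array([5.0, 1.0, 1.0, 0.5, 0.1])
V = simple_pca(X, 2)
print(np.abs(V[0]).argmax())
```

The class-aware variant proposed in the paper would instead accumulate variations within each class before producing the eigenvectors.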

    Lost Property Detection by Template Matching using Genetic Algorithm and Random Search

    In this paper, we propose an object search method, adapted to transformations of the target object, for detecting lost property. The object search is divided into two stages: a global search and a local search. We use template matching with a Genetic Algorithm (GA) for the global search and a random search for the local search. According to the experimental results, the system can detect the rough position of the target object. The search accuracy of the proposed method is 83.6%, whereas a comparative experiment using only the GA achieves 42.1%. We have verified that the proposed method is effective for lost property detection. In future work, we need to increase the search accuracy so that objects can be found more stably; in particular, the local search needs improvement.
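    A toy version of the two-stage search, a GA over candidate template positions for the global stage and random perturbation around the GA result for the local stage, is sketched below on a smooth synthetic image; the population size, mutation range, and fitness (negative sum of squared differences) are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)

def ssd(image, template, x, y):
    """Sum of squared differences between the template and the image patch
    whose top-left corner is (x, y)."""
    h, w = template.shape
    return ((image[y:y+h, x:x+w] - template) ** 2).sum()

def ga_search(image, template, pop=30, gens=40):
    """Global stage: a simple GA over candidate (x, y) positions."""
    h, w = template.shape
    H, W = image.shape
    P = np.column_stack([rng.integers(0, W - w, pop), rng.integers(0, H - h, pop)])
    for _ in range(gens):
        fit = np.array([-ssd(image, template, x, y) for x, y in P])
        elite = P[np.argsort(-fit)[:pop // 2]]               # keep the best half
        children = elite[rng.integers(0, len(elite), pop - len(elite))].copy()
        children += rng.integers(-3, 4, children.shape)      # mutation
        P = np.clip(np.vstack([elite, children]), 0, [W - w, H - h])
    return P[np.argmax([-ssd(image, template, x, y) for x, y in P])]

def local_random_search(image, template, start, tries=200, radius=3):
    """Local stage: random perturbations around the GA result."""
    H, W = image.shape
    h, w = template.shape
    best, best_err = start, ssd(image, template, *start)
    for _ in range(tries):
        cand = np.clip(start + rng.integers(-radius, radius + 1, 2), 0, [W - w, H - h])
        err = ssd(image, template, *cand)
        if err < best_err:
            best, best_err = cand, err
    return best

image = np.add.outer(np.arange(40) * 40.0, np.arange(40))   # smooth synthetic scene
template = image[12:20, 25:33].copy()                       # true corner at x=25, y=12
best_global = ga_search(image, template)
best = local_random_search(image, template, best_global)
print(best)
```

The local stage can only refine what the global stage found, which mirrors the paper's observation that the local search is the component most in need of improvement.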

    Construction of a Dangerous-Driving Prediction System Considering Individual Characteristics Based on the Driver's Driving Behavior

    Many car accidents are caused by a driver's deviation from the normal condition, such as carelessness. We aim to construct a driving-assist system that can detect the signal of a driver's deviation from the normal condition. The system detects this deviation signal using the driver's time-series head-motion information. In this paper, we optimize the categorization of the driver's head motion using two kinds of unsupervised neural networks: Self-Organizing Maps (SOMs) and Fuzzy Adaptive Resonance Theory. Moreover, we introduce a method to analyze individual differences in the electroencephalogram (EEG) using a SOM. To confirm these individual differences, we conduct experiments using real EEG data. The experimental results suggest that the individual differences can be expressed using the SOM.
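    The SOM-based categorization of head-motion (and EEG) features can be sketched with a minimal SOM trainer; the grid size, decay schedules, and the synthetic two-cluster "head motion" features below are assumptions:

```python
import numpy as np

def train_som(X, grid=(4, 4), lr=0.5, sigma=1.5, epochs=30):
    """Minimal SOM sketch: each sample pulls its best-matching unit (BMU)
    and the BMU's grid neighbors toward it; the learning rate and the
    neighborhood width decay linearly over the epochs."""
    rng = np.random.default_rng(8)
    W = rng.random((grid[0] * grid[1], X.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(epochs):
        a = lr * (1 - t / epochs)
        s = sigma * (1 - t / epochs) + 0.5
        for x in X:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)   # grid distance
            h = np.exp(-d2 / (2 * s ** 2))                   # neighborhood
            W += a * h[:, None] * (x - W)
    return W

rng = np.random.default_rng(9)
# Two synthetic head-motion feature clusters (e.g. "normal" vs "deviated")
X = np.vstack([rng.normal(0.2, 0.05, (30, 3)), rng.normal(0.8, 0.05, (30, 3))])
W = train_som(X)
bmus = [np.argmin(((W - x) ** 2).sum(axis=1)) for x in X]
print(len(set(bmus)))
```

After training, distinct driving states map onto distinct regions of the map, which is what makes the SOM usable both for categorization and for visualizing individual differences.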

    Preference Analysis Method Applying Relationship between Electroencephalogram Activities and Egogram in Prefrontal Cortex Activities: How to Collaborate between Engineering Techniques and Psychology

    This paper introduces a method of preference analysis based on electroencephalogram (EEG) analysis of prefrontal cortex activity. The proposed method applies the relationship between EEG activity and the egogram. The EEG is sensed at a single point and recorded by means of a dry-type sensor with a small number of electrodes. The EEG analysis applies feature mining and clustering of EEG patterns using a self-organizing map (SOM). Because the EEG activity of the prefrontal cortex displays individual differences, we take them into account by constructing a feature vector as the input modality of the SOM. The input vector for the SOM consists of the extracted EEG feature vector and a human character vector, i.e., the human character quantified through ego analysis using psychological testing. In preprocessing, we extract the EEG feature vector by calculating the time average in each frequency band: θ, low-β, and high-β. To prove the effectiveness of the proposed method, we perform experiments using real EEG data. The results show that the accuracy of the EEG pattern classification is higher than it was before the improvement of the input vector.
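    The input-vector construction, time-averaged power per band (θ, low-β, high-β) concatenated with the quantified egogram, might look like the following; the sampling rate, exact band edges, and the five-scale egogram values are assumed for illustration:

```python
import numpy as np

FS = 256.0   # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "low_beta": (13, 20), "high_beta": (20, 30)}  # assumed edges (Hz)

def eeg_feature_vector(signal):
    """Mean power in each band (theta, low-beta, high-beta) for a
    single-channel EEG segment, via the FFT power spectrum."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return np.array([power[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS.values()])

def som_input(signal, egogram):
    """Concatenate the EEG feature vector with the quantified character
    (egogram) vector to form the SOM input."""
    return np.concatenate([eeg_feature_vector(signal), egogram])

rng = np.random.default_rng(10)
t = np.arange(1024) / FS
eeg = np.sin(2 * np.pi * 6 * t) + 0.3 * rng.normal(size=t.size)  # dominant 6 Hz (theta) rhythm
egogram = np.array([3.0, 5.0, 4.0, 2.0, 4.0])                    # hypothetical 5-scale egogram scores
v = som_input(eeg, egogram)
print(v.shape)
```

Appending the egogram scores is what lets the SOM cluster EEG patterns per character type rather than forcing one clustering across all subjects.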